UMBC High Performance Computing Facility : Choosing a Compiler and MPI Implementation
Choosing a Language

All MPI implementations on HPC have bindings for the following languages: C, C++, Fortran 77, and Fortran 90.

All four languages support roughly the same set of MPI features. The C++ bindings include both object-oriented and non-object-oriented interfaces and make use of C++ namespaces. The Fortran 90 bindings are a superset of the Fortran 77 bindings, adding support for complex numbers and other Fortran 90 features. More details about the language differences are available at the website for the MPI standard (note, though, that we only have MPI 1 and 2 implementations, not 2.1). Both available compilers support C, C++, and Fortran 77, 90, and 95. Since the MPI implementations also support all four languages, you are free to pick whichever language best suits the task. Keep in mind that combining Fortran and C++ code can be somewhat complicated (though doable), while combining C and Fortran is fairly straightforward.
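As a concrete illustration of the bindings (a generic sketch, not code from the HPC documentation), the following minimal program uses the C bindings; equivalent calls exist in the C++ and Fortran bindings, so the choice of language is largely a matter of preference.

/* hello_mpi.c - minimal example using the MPI C bindings. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);               /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */

    printf("Hello from process %d of %d\n", rank, size);

    MPI_Finalize();                       /* shut down MPI */
    return 0;
}

This program can be built and run with any of the compiler and MPI combinations discussed below.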
Choosing a Compiler

HPC currently has two compiler suites: the GNU Compiler Collection (GCC) and the Portland Group (PGI) compilers.

Both compilers support C, C++, and Fortran 77, 90, and 95, and both work with all three MPI implementations available on HPC. Significant differences between the two include the following:

If you are using Fortran 90 or 95, it is probably best to use the PGI compiler until GCC's Fortran support matures further. Also, if your program uses several languages, it is far easier to stick with a single compiler suite for all of them. Further details about both compiler suites can be found in their respective documentation.
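Whichever suite you choose, MPI programs are normally compiled through the wrapper scripts (mpicc, mpicxx, mpif77, mpif90) supplied by the MPI implementation; the wrapper adds the MPI include paths and libraries and invokes the underlying compiler it was configured with. The exact way a compiler/MPI combination is selected on HPC (environment modules, switcher settings, etc.) is not covered here, so the commands below are only a sketch; solver.f90 is a hypothetical file name.

mpicc  -o hello_mpi hello_mpi.c   # compile the C example above
mpif90 -o solver solver.f90       # compile a (hypothetical) Fortran 90 code
mpicc -show                       # MVAPICH/MVAPICH2: print the underlying compiler command
mpicc --showme                    # OpenMPI: print the underlying compiler command

The last two commands are a quick way to check which backend compiler (GCC or PGI) a given wrapper will actually call in your environment.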
Choosing an MPI Implementation

The MPI implementations available on HPC are OpenMPI, MVAPICH, and MVAPICH2.

Preliminary benchmarks on our cluster indicate that OpenMPI is often slower than MVAPICH and MVAPICH2, sometimes much slower: MVAPICH and MVAPICH2 are faster for many, though not all, MPI calls and message sizes, while OpenMPI is faster in some situations. See technical report HPCF-2008-6, MPI Performance on the hpc.rs.umbc.edu Cluster (PDF), for details. Note that the report predates our upgrade to OFED 1.3.1, which updated MVAPICH2, MVAPICH, OpenMPI, and the underlying libraries and drivers, so its results may no longer hold. You can find more information about these MPI implementations on their websites.
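Comparisons of this kind come down to timing individual MPI calls over a range of message sizes. As a rough illustration only (this is not the code used in HPCF-2008-6), here is a minimal ping-pong sketch in C that measures average one-way message time between two processes; the repetition count and message sizes are arbitrary choices.

/* pingpong.c - time MPI_Send/MPI_Recv round trips for a range of
 * message sizes. Run with exactly two processes via mpirun/mpiexec. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    const int reps = 100;          /* repetitions per message size (arbitrary) */
    int rank, size, bytes, i;
    MPI_Status st;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size != 2) {
        if (rank == 0) fprintf(stderr, "run this with exactly 2 processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* message sizes from 1 byte to 1 MB, doubling each time */
    for (bytes = 1; bytes <= 1 << 20; bytes *= 2) {
        char *buf = malloc(bytes);
        double t0, t;
        memset(buf, 0, bytes);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < reps; i++) {
            if (rank == 0) {       /* rank 0 sends first, then waits for the echo */
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
            } else {               /* rank 1 echoes each message back */
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t = (MPI_Wtime() - t0) / (2.0 * reps);   /* average one-way time */
        if (rank == 0)
            printf("%8d bytes: %10.2f microseconds one-way\n", bytes, 1.0e6 * t);
        free(buf);
    }

    MPI_Finalize();
    return 0;
}

Building the same source against each implementation and comparing the output is the simplest way to see how the implementations behave on the calls and message sizes your own code actually uses.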